Machine learning has been successfully applied to systems applications such as memory prefetching and caching, where learned models have been shown to outperform heuristics. However, the lack of understanding of the inner workings of these models (interpretability) remains a major obstacle to their adoption in real-world deployments. Understanding a model's behavior can help system administrators and developers gain confidence in the model, understand risks in production, and debug unexpected behavior. Interpretability for models used in computer systems poses a particular challenge: unlike ML models trained on images or text, the input domain (e.g., memory access patterns, program counters) is not immediately interpretable. A major challenge is therefore to explain the model in terms of concepts that are accessible to human practitioners. By analyzing a state-of-the-art caching model, we provide evidence that the model has learned concepts beyond simple statistics that can be leveraged for explanations. Our work provides a first step toward the explainability of systems ML models and highlights both the promise and the challenges of this emerging research area.
translated by Google Translate
Improvements in computing-system performance driven by Moore's Law have transformed society. As these hardware-driven gains slow, it becomes more important for software developers to focus on performance and efficiency during development. While several studies have demonstrated the potential of such improved code efficiency (e.g., 2x better generational improvements compared to hardware), unlocking these gains in practice has been challenging. Reasoning about algorithmic complexity and the interaction of coding patterns with hardware can be difficult for the average programmer, especially when combined with pragmatic constraints around development velocity and multi-person development. This paper aims to address this problem. We analyze a large competitive-programming dataset from the Google Code Jam competition and find that efficient code is indeed rare, with a 2x runtime difference between the median and 90th-percentile solutions. We propose using machine learning to automatically provide prescriptive feedback, in the form of hints, to guide programmers toward writing high-performance code. To learn these hints automatically from the dataset, we propose a novel discrete variational autoencoder in which each discrete latent variable represents a different category of performance-improving code edits. We show that this method represents the multimodal space of code-efficiency edits better than a sequence-to-sequence baseline and generates a distribution of more efficient solutions.
In node classification tasks, heterophily and oversmoothing are two problems that can hurt the performance of graph convolutional networks (GCNs). The heterophily problem refers to a model's inability to handle heterophilous graphs, in which neighboring nodes belong to different classes; the oversmoothing problem refers to a model's degenerating performance as the number of layers increases. These two seemingly unrelated problems have mostly been studied independently, but there is recent empirical evidence that solving one can benefit the other. In this work, going beyond empirical observation, we aim to: (1) analyze the heterophily and oversmoothing problems from a unified theoretical perspective, (2) identify a common cause of the two problems, and (3) propose simple yet effective strategies that address this common cause. In our theoretical analysis, we show that a common cause of the heterophily and oversmoothing problems (namely, a node's relative degree and its heterophily level) drives the node representations in successive layers to "move" closer to the original decision boundary, which increases the misclassification rate of node labels under certain constraints. We show theoretically that: (1) nodes with high heterophily have higher misclassification rates; (2) even under homophily, degree disparity in a node's neighborhood can influence the movement of node representations and cause a "pseudo-heterophily" situation, which helps explain oversmoothing; (3) allowing not only positive but also negative messages during message passing can help counteract the common cause of the two problems. Based on our theoretical insights, we propose simple modifications to the GCN architecture (namely, learned degree corrections and signed messages) and show that they alleviate the heterophily and oversmoothing problems on nine networks.
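The two proposed modifications can be sketched in a few lines of numpy. This is a minimal sketch under assumed parameter forms (the paper's exact parameterization of the degree correction and the per-edge signs may differ):

```python
import numpy as np

def signed_gcn_layer(H, A, S, W, alpha):
    """One message-passing step illustrating the two proposed fixes:
    signed messages (S holds a per-edge sign in [-1, 1], letting a node
    negate messages from dissimilar neighbors) and a degree correction
    (alpha re-balances self vs. neighbor information instead of a fixed
    normalization). H: node features, A: adjacency, W: weight matrix."""
    deg = A.sum(axis=1, keepdims=True)
    M = ((A * S) @ H) / np.maximum(deg, 1)   # signed, degree-normalized aggregation
    return np.tanh((alpha * H + (1 - alpha) * M) @ W)
```

With all signs positive and alpha fixed, this reduces to a vanilla mean-aggregation GCN layer; learning the signs and the self/neighbor balance is what allows negative information to flow during message passing.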
Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for even moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear--for example, Integrated Gradients and SHAP--can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as identifying local model behaviour, spurious feature identification, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks. In particular, we show that once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
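The "complete and linear" property at issue can be made concrete with Integrated Gradients, whose attributions sum to the difference in model output between the input and a baseline. A minimal numerical sketch, with a toy model and inputs of our own choosing rather than the paper's construction:

```python
import numpy as np

def model(x):
    # toy nonlinear model standing in for a "moderately rich" model class
    return float(x[0] * x[1] + np.sin(x[2]))

def integrated_gradients(f, x, baseline, steps=256):
    """Numerical Integrated Gradients: attributions satisfy the
    completeness axiom, summing to f(x) - f(baseline)."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    attr, eps = np.zeros_like(x), 1e-5
    for k in range(1, steps + 1):
        point = baseline + (k - 0.5) / steps * (x - baseline)
        # central-difference gradient estimate at the interpolation point
        grad = np.array([(f(point + eps * e) - f(point - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        attr += grad * (x - baseline) / steps
    return attr

x, b = np.array([1.0, 2.0, 0.5]), np.zeros(3)
attr = integrated_gradients(model, x, b)
# completeness: attributions sum to the model-output difference
assert abs(attr.sum() - (model(x) - model(b))) < 1e-3
```

The paper's point is that satisfying this axiom does not by itself make the attributions informative about model behaviour; directly evaluating the model on the perturbations one cares about can answer the end-task more reliably.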
Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models and we are yet to understand what explanation methods can and cannot do. Several factors such as data, model prediction, hyperparameters used in training the model, and random initialization can all influence downstream explanations. While previous work empirically hinted that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study to quantify this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we measure the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors (hyperparameters) (inputs to generate saliency-based Es or Ys). We discover that Y's relative direct influence on E follows an odd pattern; the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases in the top-performing models. We believe our work is a promising first step towards providing better guidance for practitioners who can make more informed decisions in utilizing these explanations by knowing what factors are at play and how they relate to their end task.
We investigate whether three types of post hoc model explanations--feature attribution, concept activation, and training point ranking--are effective for detecting a model's reliance on spurious signals in the training data. Specifically, we consider the scenario where the spurious signal to be detected is unknown, at test-time, to the user of the explanation method. We design an empirical methodology that uses semi-synthetic datasets along with pre-specified spurious artifacts to obtain models that verifiably rely on these spurious training signals. We then provide a suite of metrics that assess an explanation method's reliability for spurious signal detection under various conditions. We find that the post hoc explanation methods tested are ineffective when the spurious artifact is unknown at test-time, especially for non-visible artifacts like a background blur. Further, we find that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals even when the model being explained does not rely on spurious artifacts. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model's reliance on spurious signals.
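The semi-synthetic construction can be sketched as follows. This is a hypothetical pipeline in the spirit of the described methodology; the paper's artifacts, which include non-visible ones such as background blur, are more varied:

```python
import numpy as np

def inject_spurious_artifact(images, labels, target_class, strength=1.0):
    """Stamp a small corner patch onto every image of one class, creating
    a dataset where a model can verifiably learn to rely on the artifact
    rather than the true signal. images: (n, h, w) array."""
    out = images.copy()
    out[labels == target_class, :4, :4] = strength
    return out
```

Training one model on the stamped data and one on the clean data then gives ground truth for evaluating whether an explanation method flags the artifact, and whether it falsely flags it for the clean model.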
Each year, expert-level performance is attained in increasingly complex multiagent domains, with notable examples including Go, poker, and StarCraft II. This rapid progress is accompanied by a corresponding need to better understand how such agents achieve this performance, to enable their safe deployment, identify limitations, and reveal their potential for improvement. In this paper, we take a step back from performance-centric multiagent learning and instead turn our attention to agent behavior analysis. We introduce a model-agnostic method for discovering behavior clusters in multiagent domains, using variational inference to learn a hierarchy of behaviors at the joint and local agent levels. Our framework makes no assumptions about the agents' underlying learning algorithms, does not require access to their latent states or models, and can be trained using entirely offline observational data. We illustrate the effectiveness of our method for coupled understanding of behavior at the joint and local agent levels, detecting behavioral changepoints throughout training, and discovering core behavioral concepts (e.g., those that facilitate higher returns), and demonstrate the method's scalability in a high-dimensional multiagent MuJoCo control domain.
Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision-making stages of a task. However, a key challenge is that IDS systems are not perfect: in complex real-world scenarios they may produce incorrect output or fail to work entirely. The field of explainable AI planning (XAIP) seeks to develop techniques that make the decision making of sequential decision-making AI systems more explainable to end users. Critically, prior work applying XAIP techniques to IDS systems has assumed that the plan produced by the planner is always optimal, and hence that the action or plan recommended to the user as decision support is always correct. In this work, we study novice users interacting with a non-robust IDS system, one that occasionally recommends the wrong action and may become unavailable after users have grown accustomed to its guidance. We introduce a novel explanation type for planning-based IDS systems, subgoal-based explanations, which supplement traditional IDS output with information about the subgoal that the recommended action contributes toward. We demonstrate that subgoal-based explanations lead to improved user task performance, improve users' ability to distinguish optimal and suboptimal IDS recommendations, are preferred by users, and enable more robust user performance in the case of IDS failure.
What is learned by superhuman neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to produce faithful explanations of their decisions will be limited, ultimately capping what we can achieve with neural network interpretability. In this work, we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing a wide range of human chess concepts, we show when during training and where in the network these concepts are represented. We also provide a behavioral analysis focused on opening play, including qualitative analysis from chess grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation of the low-level details of AlphaZero's representations, and make the resulting behavioral and representational analyses available online.
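The concept-probing methodology is, at its core, a linear probe on a layer's activations. A generic sketch with synthetic activations and a ridge-regression probe (the actual study probes AlphaZero's layers with chess-specific concept labels):

```python
import numpy as np

def probe_accuracy(activations, concept_labels, l2=1e-3):
    """Fit a ridge-regression probe (with bias) that predicts a binary
    concept label from a layer's activations; held-out accuracy above
    chance is evidence the concept is linearly decodable at that layer."""
    n = len(concept_labels)
    X = np.hstack([activations, np.ones((n, 1))])   # append bias column
    y = concept_labels.astype(float)
    split = n // 2
    Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]
    w = np.linalg.solve(Xtr.T @ Xtr + l2 * np.eye(X.shape[1]), Xtr.T @ ytr)
    return float(((Xte @ w > 0.5) == (yte > 0.5)).mean())
```

In the paper's setting, the result of interest is when in training and where in the network each concept becomes decodable, obtained by sweeping such probes over checkpoints and layers.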
Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance, solely, on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model. Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings. (Work done during the Google AI Residency Program. All code to replicate our findings will be available at https://goo.gl/hBmhDt. We refer here to the broad category of visualization and attribution methods aimed at interpreting trained models; these methods are often used for interpreting deep neural networks, particularly on image data.)
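The model-dependence test can be sketched as follows, using a toy linear model and input-gradient saliency assumed for illustration (the paper runs this on deep networks): a method that actually depends on the model should produce maps that decorrelate once the weights are re-randomized, while a model-independent map cannot.

```python
import numpy as np

def gradient_saliency(W, x):
    """Input-gradient saliency for a linear model f(x) = W @ x; the
    gradient w.r.t. x is constant in x and equals the column sums of W."""
    return np.abs(W.sum(axis=0))

def spearman(a, b):
    """Spearman rank correlation, from scratch (no ties expected here)."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

rng = np.random.default_rng(0)
x = rng.normal(size=64)
trained = gradient_saliency(rng.normal(size=(3, 64)), x)       # "trained" weights
rerandomized = gradient_saliency(rng.normal(size=(3, 64)), x)  # re-randomized weights
edge_like = np.abs(x)   # model-independent baseline, the edge-detector analogy
```

Here the gradient map changes under re-randomization and so passes this particular check, whereas `edge_like` is identical for both models and therefore cannot be explaining the model.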